
    Cluster synchronization in an ensemble of neurons interacting through chemical synapses

    In networks of periodically firing spiking neurons coupled through chemical synapses, we analyze cluster states, in which the ensemble of neurons is subdivided into a few clusters, within each of which the neurons are perfectly synchronized. To clarify the stability of a cluster state, we decompose its linear stability into two parts: the stability of the mean state and the stabilities of the individual clusters. By computing Floquet matrices for these two parts, we determine the total stability of the cluster state for any neuron model and any interaction strength, even when the network is infinitely large. First, we apply this stability analysis to synchronization in a large ensemble of integrate-and-fire (IF) neurons. In the one-cluster state we find a change of cluster stability, which shows that in-phase synchronization of IF neurons occurs only with inhibitory synapses. We then investigate the entrainment of two clusters of IF neurons with different excitability. IF neurons with fast-decaying synapses show low entrainment capability, which is explained by a pitchfork bifurcation that appears in the two-cluster state as the synaptic decay time constant is varied. Second, we analyze the one-cluster state of Hodgkin-Huxley (HH) neurons and discuss the differences in synchronization properties between IF and HH neurons.
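    The stability analysis described above hinges on computing Floquet (monodromy) matrices of the periodic cluster solution. The abstract does not give the numerical procedure, so the sketch below is only an illustration of how such a matrix can be estimated for a periodically spiking neuron model; it uses a FitzHugh-Nagumo oscillator as a stand-in for the IF and HH models of the paper, and all parameters are assumptions.

        # Minimal sketch: estimate the Floquet (monodromy) matrix of a periodically
        # spiking neuron model by finite-differencing the one-period flow map.
        # A FitzHugh-Nagumo oscillator stands in for the IF/HH models of the paper;
        # all parameters are illustrative assumptions.
        import numpy as np
        from scipy.integrate import solve_ivp

        def fhn(t, x, I=0.5, a=0.7, b=0.8, eps=0.08):
            v, w = x
            return [v - v**3 / 3.0 - w + I, eps * (v + a - b * w)]

        def flow(x0, T):
            """Integrate the model for a time T starting from state x0."""
            sol = solve_ivp(fhn, (0.0, T), x0, rtol=1e-9, atol=1e-11)
            return sol.y[:, -1]

        # 1) Relax onto the limit cycle (the periodic spiking solution).
        x_cycle = flow(np.array([0.0, 0.0]), 500.0)

        # 2) Estimate the period from successive upward crossings of v = 1.0.
        sol = solve_ivp(fhn, (0.0, 200.0), x_cycle, max_step=0.01)
        v = sol.y[0]
        crossings = sol.t[np.where((v[:-1] < 1.0) & (v[1:] >= 1.0))[0]]
        T = float(np.mean(np.diff(crossings)))

        # 3) Monodromy matrix: columns are finite differences of the one-period flow.
        x_star = flow(x_cycle, crossings[0])      # a reference point on the cycle
        delta = 1e-6
        M = np.zeros((2, 2))
        for j in range(2):
            dx = np.zeros(2)
            dx[j] = delta
            M[:, j] = (flow(x_star + dx, T) - flow(x_star - dx, T)) / (2.0 * delta)

        multipliers = np.linalg.eigvals(M)        # Floquet multipliers
        print("period:", T)
        print("Floquet multipliers:", multipliers)

    One multiplier stays close to one, reflecting shifts along the orbit; stability of the periodic solution requires the remaining multipliers to lie inside the unit circle.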

    The life of the cortical column: opening the domain of functional architecture of the cortex

    The concept of the cortical column refers to vertical cell bands with similar response properties, which were initially observed by Vernon Mountcastle’s mapping of single-cell recordings in the cat somatic cortex. It has subsequently guided over 50 years of neuroscientific research, in which fundamental questions about the modularity of the cortex and basic principles of sensory information processing were empirically investigated. Nevertheless, the status of the column remains controversial today, as skeptical commentators proclaim that the vertical cell bands are a functionally insignificant by-product of ontogenetic development. This paper inquires how the column came to be viewed as an elementary unit of the cortex from Mountcastle’s discovery in 1955 until David Hubel and Torsten Wiesel’s receipt of the Nobel Prize in 1981. I first argue that Mountcastle’s vertical electrode recordings served as criteria for applying the column concept to electrophysiological data. In contrast to previous authors, I claim that this move from electrophysiological data to the phenomenon of columnar responses was concept-laden, but not theory-laden. In the second part of the paper, I argue that Mountcastle’s criteria provided Hubel and Wiesel with a conceptual outlook, i.e., it allowed them to anticipate columnar patterns in the cat and macaque visual cortex. I argue that in the late 1970s, this outlook only briefly took a form that one could call a ‘theory’ of the cerebral cortex, before new experimental techniques started to diversify column research. I end by showing how this account of early column research fits into a larger project that follows the conceptual development of the column into the present.

    Modular organization enhances the robustness of attractor network dynamics

    Modular organization characterizes many complex networks occurring in nature, including the brain. In this paper we show that modular structure may be responsible for increasing the robustness of certain dynamical states of such systems. In a neural network model with threshold-activated binary elements, we observe that the basins of attraction of attractors corresponding to patterns embedded using a learning rule occupy maximum volume in phase space at an optimal modularity. Simultaneously, the convergence time to these attractors decreases as a result of cooperative dynamics between the modules. The role of modularity in increasing the global stability of certain desirable attractors of a system may provide a clue to its evolution and ubiquity in natural systems.
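    The abstract specifies a threshold-activated binary network with embedded patterns but not the exact connectivity or learning rule, so the sketch below is an assumption-laden illustration: a Hopfield-style network with Hebbian weights whose connections are kept in full within modules and diluted between modules, with recall quality measured by the overlap with the stored pattern.

        # Minimal sketch: Hopfield-style binary attractor network with modular
        # connectivity. The learning rule, the masking scheme and all parameters
        # are illustrative assumptions, not the paper's exact model.
        import numpy as np

        rng = np.random.default_rng(0)
        N, n_modules, P = 200, 4, 5                    # neurons, modules, stored patterns
        module = np.repeat(np.arange(n_modules), N // n_modules)

        # Hebbian weights from random +/-1 patterns.
        patterns = rng.choice([-1, 1], size=(P, N))
        W = patterns.T @ patterns / N
        np.fill_diagonal(W, 0.0)

        # Modular mask: keep all intra-module links, a fraction p_out of inter-module links.
        p_out = 0.2
        same = module[:, None] == module[None, :]
        mask = np.where(same, 1.0, (rng.random((N, N)) < p_out).astype(float))
        mask = np.triu(mask, 1)
        mask = mask + mask.T                           # symmetric, no self-connections
        W *= mask

        def recall(pattern, flip_frac=0.2, sweeps=20):
            """Asynchronous recall from a corrupted copy of a stored pattern."""
            state = pattern.copy()
            flipped = rng.choice(N, int(flip_frac * N), replace=False)
            state[flipped] *= -1
            for _ in range(sweeps):
                for i in rng.permutation(N):           # threshold-activated binary units
                    state[i] = 1 if W[i] @ state >= 0 else -1
            return state

        for mu in range(P):
            overlap = recall(patterns[mu]) @ patterns[mu] / N   # 1.0 = perfect recall
            print(f"pattern {mu}: overlap {overlap:.2f}")

    Scanning p_out (or an equivalent modularity parameter) and measuring how much initial corruption recall tolerates is one simple way to probe how basin size depends on modularity.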

    Potential role of monkey inferior parietal neurons coding action semantic equivalences as precursors of parts of speech

    The anterior portion of the inferior parietal cortex possesses comprehensive representations of actions embedded in behavioural contexts. Mirror neurons, which respond to both self-executed and observed actions, exist in this brain region in addition to those originally found in the premotor cortex. We found that parietal mirror neurons responded differentially to identical actions embedded in different contexts. Another type of parietal mirror neuron represents an inverse and complementary property, responding equally to dissimilar actions made by the animal itself and by others for an identical purpose. Here, we propose the hypothesis that these sets of inferior parietal neurons constitute a neural basis for encoding the semantic equivalence of various actions across different agents and contexts. The neurons have mirror neuron properties, and they encode generalization of agents, differentiation of outcomes, and categorization of actions that lead to common functions. By integrating the activities of these mirror neurons with various codings, we further suggest that in the ancestral primates' brains, these various representations of meaningful action led to the gradual establishment of equivalence relations among the different types of actions, by sharing common action semantics. Such differential codings of the components of actions might represent precursors to the parts of protolanguage, such as gestural communication, which are shared among various members of a society. Finally, we suggest that the inferior parietal cortex serves as an interface between this action semantics system and other higher semantic systems, through common structures of action representation that mimic language syntax.

    A Comprehensive Workflow for General-Purpose Neural Modeling with Highly Configurable Neuromorphic Hardware Systems

    In this paper we present a methodological framework that meets novel requirements emerging from upcoming types of accelerated and highly configurable neuromorphic hardware systems. We describe in detail a device with 45 million programmable and dynamic synapses that is currently under development, and we sketch the conceptual challenges that arise from taking this platform into operation. More specifically, we aim to establish this neuromorphic system as a flexible and neuroscientifically valuable modeling tool that can be used by non-hardware-experts. We consider various functional aspects to be crucial for this purpose, and we introduce a consistent workflow with detailed descriptions of all involved modules that implement the suggested steps: the integration of the hardware interface into the simulator-independent model description language PyNN; a fully automated translation between the PyNN domain and appropriate hardware configurations; an executable specification of the future neuromorphic system that can be seamlessly integrated into this biology-to-hardware mapping process as a test bench for all software layers and possible hardware design modifications; and an evaluation scheme that deploys models from a dedicated benchmark library, compares the results generated by virtual or prototype hardware devices with reference software simulations, and analyzes the differences. The integration of these components into one hardware-software workflow provides an ecosystem for ongoing preparative studies that support the hardware design process and provides the basis for maturing the model-to-hardware mapping software. The functionality and flexibility of the latter are demonstrated with a variety of experimental results.
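    The workflow rests on expressing models in the simulator-independent PyNN description language, so that the same script can target either a reference software simulator or the neuromorphic hardware backend. Below is a minimal PyNN sketch of such a description; it assumes a NEST software backend is installed, and the commented-out hardware module name is a hypothetical placeholder for whatever backend the mapping software exposes, not an actual package name.

        # Minimal sketch of a simulator-independent PyNN model description.
        # Swapping the backend import is the only change needed to retarget the model;
        # "pyNN.hardware" below is a hypothetical placeholder, not a real module name.
        import pyNN.nest as sim            # software reference simulator
        # import pyNN.hardware as sim      # hypothetical neuromorphic hardware backend

        sim.setup(timestep=0.1)            # ms

        # Two populations of conductance-based integrate-and-fire neurons.
        exc = sim.Population(80, sim.IF_cond_exp(tau_m=20.0, v_thresh=-50.0))
        inh = sim.Population(20, sim.IF_cond_exp(tau_m=20.0, v_thresh=-50.0))

        # Poisson background drive onto the excitatory population.
        noise = sim.Population(80, sim.SpikeSourcePoisson(rate=10.0))
        sim.Projection(noise, exc, sim.OneToOneConnector(),
                       sim.StaticSynapse(weight=0.01, delay=1.0),
                       receptor_type="excitatory")

        # Sparse recurrent excitation and feedback inhibition.
        sim.Projection(exc, inh, sim.FixedProbabilityConnector(0.1),
                       sim.StaticSynapse(weight=0.005, delay=1.0),
                       receptor_type="excitatory")
        sim.Projection(inh, exc, sim.FixedProbabilityConnector(0.1),
                       sim.StaticSynapse(weight=0.02, delay=1.0),
                       receptor_type="inhibitory")

        exc.record("spikes")
        sim.run(1000.0)                    # ms
        spike_data = exc.get_data("spikes")
        sim.end()

    The point of the sketch is that nothing in the model description refers to a particular backend; the translation to an appropriate hardware configuration happens entirely behind the common API, as described in the workflow above.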

    Cortical depth dependent functional responses in humans at 7T: improved specificity with 3D GRASE

    Ultra-high fields (7T and above) allow functional imaging with high contrast-to-noise ratios and improved spatial resolution. This, along with improved hardware and imaging techniques, allows investigation of columnar and laminar functional responses. Using gradient-echo (GE, T2*-weighted) sequences, layer-specific responses have been recorded from human (and animal) primary visual areas. However, their increased sensitivity to large surface veins potentially confounds the detection and interpretation of layer-specific responses. Conversely, spin-echo (SE, T2-weighted) sequences are less sensitive to large veins and have been used to map cortical columns in humans. T2-weighted 3D GRASE with inner-volume selection provides high isotropic resolution over extended volumes, overcoming some of the technical limitations of conventional 2D SE-EPI and thereby making layer-specific investigations feasible. Further, the demonstration of columnar-level specificity with 3D GRASE, despite contributions from both stimulated echoes and conventional T2 contrast, has made it an attractive alternative to 2D SE-EPI. Here, we assess the spatial specificity of cortical depth-dependent 3D GRASE functional responses in human V1 and hMT by comparing them to GE responses. In doing so, we demonstrate that 3D GRASE is less sensitive to contributions from large veins in superficial layers, while showing increased specificity (functional tuning) throughout the cortex compared to GE.

    The influence of anesthetics, neurotransmitters and antibiotics on the relaxation processes in lipid membranes

    In the proximity of melting transitions of artificial and biological membranes, fluctuations in enthalpy, area, volume and concentration are enhanced. This results in domain formation, changes of the elastic constants, changes in permeability and a slowing down of relaxation processes. In this study we used pressure perturbation calorimetry to investigate the relaxation time scale after a jump into the melting transition regime of artificial lipid membranes. This time corresponds to the characteristic time scale of domain growth. The studies were performed on single-component large unilamellar and multilamellar vesicle systems with and without the addition of small molecules such as general anesthetics, neurotransmitters and antibiotics. These drugs interact with membranes and affect melting points and melting profiles. In all systems we found that heat capacity and relaxation times are related to each other in a simple manner. The maximum relaxation time depends on the cooperativity of the heat capacity profile and decreases as the transition broadens. For this reason, the influence of a drug on the time scale of domain formation processes can be understood from its influence on the heat capacity profile. This allows estimates of the time scale of domain formation processes in biological membranes.
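    The abstract does not spell out the "simple manner" in which relaxation times and heat capacity are related. A phenomenological relation of the kind used in this line of work, stated here as an assumption rather than a quotation from the paper, is a direct proportionality between the relaxation time and the excess heat capacity:

        % Assumed phenomenological relation (not quoted from the paper): the
        % relaxation time is proportional to the excess heat capacity, so it peaks
        % where the melting profile is most cooperative and shrinks when a drug
        % broadens the transition.
        \begin{equation}
          \tau(T) \;=\; \frac{T^{2}}{L}\,\Delta c_p(T)
        \end{equation}
        % Here $\Delta c_p(T)$ is the excess heat capacity of the transition and
        % $L$ is a phenomenological coefficient.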

    Scalable Massively Parallel Artificial Neural Networks

    There is renewed interest in computational intelligence, due to advances in algorithms, neuroscience, and computer hardware. In addition, there is enormous interest in autonomous vehicles (air, ground, and sea) and robotics, which need significant onboard intelligence. Work in this area could lead not only to a better understanding of the human brain but also to very useful engineering applications. The functioning of the human brain is not well understood, but enormous progress has been made in understanding it and, in particular, the neocortex. There are many reasons to develop models of the brain. Artificial Neural Networks (ANNs), one type of model, can be very effective for pattern recognition, function approximation, scientific classification, control, and the analysis of time-series data. ANNs often use the back-propagation algorithm for training, which can require long training times, especially for large networks, although many other types of ANNs exist. Once the network is trained for a particular problem, however, it can produce results in a very short time. Parallelization of ANNs could drastically reduce the training time. An object-oriented, massively parallel ANN software package, SPANN (Scalable Parallel Artificial Neural Network), has been developed and is described here. MPI was used.
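    The abstract breaks off after mentioning MPI. The sketch below is not the SPANN package itself; it is an assumption-laden illustration of the standard data-parallel way to use MPI for back-propagation training, in which each rank computes gradients on its own shard of the training data and the gradients are averaged with an allreduce before every weight update.

        # Minimal sketch of MPI data-parallel back-propagation (not the SPANN package):
        # each rank trains the same tiny two-layer network on its own data shard, and
        # gradients are averaged across ranks with an allreduce before each update.
        # Run with e.g.:  mpiexec -n 4 python train_parallel.py
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        rng = np.random.default_rng(rank)                    # different data shard per rank
        X = rng.normal(size=(256, 8))                        # local inputs
        y = (X.sum(axis=1, keepdims=True) > 0).astype(float) # local targets

        # Identical initial weights on every rank (same seed everywhere).
        w_rng = np.random.default_rng(42)
        W1 = w_rng.normal(scale=0.1, size=(8, 16))
        W2 = w_rng.normal(scale=0.1, size=(16, 1))

        def allreduce_mean(grad):
            """Average a gradient array over all MPI ranks."""
            out = np.empty_like(grad)
            comm.Allreduce(grad, out, op=MPI.SUM)
            return out / size

        lr = 0.5
        for epoch in range(200):
            # Forward pass (tanh hidden layer, sigmoid output).
            h = np.tanh(X @ W1)
            p = 1.0 / (1.0 + np.exp(-(h @ W2)))
            # Backward pass for the cross-entropy loss.
            d_out = (p - y) / len(X)
            gW2 = h.T @ d_out
            d_h = (d_out @ W2.T) * (1.0 - h**2)
            gW1 = X.T @ d_h
            # Average gradients across ranks, then take the same step everywhere.
            W1 -= lr * allreduce_mean(gW1)
            W2 -= lr * allreduce_mean(gW2)

        loss = -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))
        if rank == 0:
            print(f"rank 0 local loss after training: {loss:.4f}")

    Because every rank starts from the same initial weights and applies the same averaged gradient, the replicas stay synchronized without any explicit broadcast of the weights.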

    Invariant computations in local cortical networks with balanced excitation and inhibition

    Cortical computations critically involve local neuronal circuits. The computations are often invariant across a cortical area, yet are carried out by networks that can vary widely within an area according to its functional architecture. Here we demonstrate a mechanism by which orientation selectivity is computed invariantly in cat primary visual cortex across an orientation preference map that provides a wide diversity of local circuits. Visually evoked excitatory and inhibitory synaptic conductances are exquisitely balanced in cortical neurons and thus keep the spike response sharply tuned at all map locations. This functional balance derives from spatially isotropic local connectivity of both excitatory and inhibitory cells. Modeling results demonstrate that such covariation is a signature of recurrent rather than purely feed-forward processing and that the observed isotropic local circuit is sufficient to generate invariant spike tuning.
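    The toy calculation below is not the authors' model; it is a hedged illustration, with made-up parameters, of the underlying arithmetic: when a broadly tuned inhibitory conductance co-varies with the excitatory one and the result is passed through a spike threshold, the output tuning comes out sharper than the conductance tuning itself.

        # Toy illustration (not the authors' model): co-varying excitatory and
        # inhibitory conductance tuning plus a spike threshold yields spike tuning
        # that is sharper than the conductance tuning itself. All parameters are
        # illustrative assumptions.
        import numpy as np

        theta = np.linspace(-90, 90, 361)              # orientation (deg)

        def gauss(x, sigma):
            return np.exp(-x**2 / (2 * sigma**2))

        g_exc = gauss(theta, 30.0)                     # tuned excitatory conductance
        g_inh = 0.8 * gauss(theta, 35.0)               # co-varying, slightly broader inhibition

        net_drive = g_exc - g_inh                      # balanced net synaptic drive
        rate = np.clip(net_drive - 0.05, 0.0, None)    # threshold-linear spike output

        def half_width(y):
            """Full width at half maximum, in degrees."""
            above = theta[y >= 0.5 * y.max()]
            return above[-1] - above[0]

        print(f"conductance tuning width : {half_width(g_exc):6.1f} deg")
        print(f"spike tuning width       : {half_width(rate):6.1f} deg")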